Malicious Payload


Real-time ML-based Defense Against Malicious Payload in Reconfigurable Embedded Systems

Stahle-Smith, Rye, Karakchi, Rasha

arXiv.org Artificial Intelligence

The growing use of FPGAs in reconfigurable systems introduces security risks through malicious bitstreams that could cause denial-of-service (DoS), data leakage, or covert attacks. We investigated chip-level hardware malicious payloads in embedded systems and proposed a supervised machine learning method to detect malicious bitstreams via static byte-level features. Our approach diverges from existing methods by analyzing bitstreams directly at the binary level, enabling real-time detection without requiring access to source code or netlists. Bitstreams were sourced from state-of-the-art (SOTA) benchmarks and re-engineered to target the Xilinx PYNQ-Z1 FPGA Development Board. Our dataset included 122 samples of benign and malicious configurations. The data were vectorized using byte frequency analysis, compressed using truncated singular value decomposition (TSVD), and balanced using SMOTE to address class imbalance. Among the evaluated classifiers, Random Forest achieved a macro F1-score of 0.97, underscoring the viability of real-time Trojan detection on resource-constrained systems. The final model was serialized and successfully deployed via PYNQ to enable integrated bitstream analysis.
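As a rough illustration of the pipeline the abstract describes, the sketch below builds byte-frequency features, compresses them with TSVD, rebalances the training split with SMOTE, and fits a Random Forest. This is not the authors' code: the synthetic stand-in bitstreams, the 32-component TSVD, and the 200-tree forest are assumptions made for the example, and scikit-learn plus imbalanced-learn are assumed to be installed.

    # Sketch of a byte-level bitstream classifier: byte histogram -> TSVD -> SMOTE -> RF.
    import numpy as np
    from sklearn.decomposition import TruncatedSVD
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import f1_score
    from imblearn.over_sampling import SMOTE

    def byte_frequency(blob: bytes) -> np.ndarray:
        """Normalized frequency of each of the 256 byte values in a bitstream."""
        data = np.frombuffer(blob, dtype=np.uint8)
        counts = np.bincount(data, minlength=256).astype(np.float64)
        return counts / max(len(data), 1)

    # Stand-in data: in practice each blob would be a .bit file read from disk,
    # e.g. open("design.bit", "rb").read(); labels 0 = benign, 1 = malicious.
    rng = np.random.default_rng(0)
    blobs = [rng.integers(0, 256, size=4096, dtype=np.uint8).tobytes() for _ in range(122)]
    y = rng.integers(0, 2, size=122)

    X = np.stack([byte_frequency(b) for b in blobs])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    svd = TruncatedSVD(n_components=32, random_state=0)          # compress 256-d histograms
    X_tr_svd, X_te_svd = svd.fit_transform(X_tr), svd.transform(X_te)

    X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr_svd, y_tr)  # rebalance classes
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
    print("macro F1:", f1_score(y_te, clf.predict(X_te_svd), average="macro"))

In a deployment like the one the abstract mentions, the fitted pipeline would be serialized (e.g. with joblib) and the same byte-frequency transform applied to each incoming bitstream before classification on the board.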


Malicious AI Models Undermine Software Supply-Chain Security

Communications of the ACM

Integrating malicious AI models into software supply chains presents a significant and emerging threat to cybersecurity. Attackers aim to embed malicious AI models in software components and widely used tools, thereby infiltrating systems at a foundational level. Once integrated, the malicious AI models execute embedded unauthorized code, which performs actions such as exfiltrating sensitive data, manipulating data integrity, or enabling unauthorized access to critical systems. Compromised development tools, tampered libraries, and pre-trained models are the primary vehicles for introducing malicious AI models into the software supply chain. Developers often rely on libraries and frameworks to import pre-trained AI models to expedite software development.
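As a small, hedged illustration (not something the article prescribes), one basic hygiene step against a tampered pre-trained model artifact is to pin and verify its checksum before loading it; the file name and digest below are hypothetical placeholders.

    # Illustrative only: verify a downloaded model artifact against a pinned SHA-256 digest.
    import hashlib

    EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"  # placeholder

    def verify_artifact(path: str, expected: str) -> bool:
        digest = hashlib.sha256()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(8192), b""):   # hash in chunks to handle large files
                digest.update(chunk)
        return digest.hexdigest() == expected

    # if not verify_artifact("pretrained_model.bin", EXPECTED_SHA256):
    #     raise RuntimeError("model artifact does not match the pinned digest")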


A Large-Scale Exploit Instrumentation Study of AI/ML Supply Chain Attacks in Hugging Face Models

Casey, Beatrice, Santos, Joanna C. S., Mirakhorli, Mehdi

arXiv.org Artificial Intelligence

The development of machine learning (ML) techniques has led to ample opportunities for developers to build and deploy their own models. Hugging Face serves as an open source platform where developers can share and download models in an effort to make ML development more collaborative. In order for models to be shared, they first need to be serialized. Certain Python serialization methods are considered unsafe, as they are vulnerable to object injection. This paper investigates the pervasiveness of these unsafe serialization methods across Hugging Face and demonstrates, through an exploitation approach, that models using unsafe serialization methods can be exploited and shared, creating an unsafe environment for ML developers. We investigate to what extent Hugging Face is able to flag repositories and files using unsafe serialization methods, and we develop a technique to detect malicious models. Our results show that Hugging Face is home to a wide range of potentially vulnerable models.
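To make the object-injection risk concrete, here is a minimal sketch (not the paper's detection technique) that scans a pickle stream's opcodes with the standard-library pickletools module and flags imports of modules commonly abused in malicious models; the deny-list and the benign example payload are illustrative assumptions.

    # Sketch: flag pickle opcodes that import suspicious modules (GLOBAL / STACK_GLOBAL).
    import pickle
    import pickletools

    SUSPICIOUS = {"os", "posix", "nt", "subprocess", "builtins"}

    def flag_suspicious(pickled: bytes) -> list[str]:
        hits, strings = [], []
        for opcode, arg, _pos in pickletools.genops(pickled):
            if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
                strings.append(str(arg))                      # remember string constants
            elif opcode.name == "GLOBAL":                     # arg is "module name"
                if str(arg).split()[0].split(".")[0] in SUSPICIOUS:
                    hits.append(str(arg))
            elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
                module, name = strings[-2], strings[-1]       # pushed just before the opcode
                if module.split(".")[0] in SUSPICIOUS:
                    hits.append(f"{module}.{name}")
        return hits

    print(flag_suspicious(pickle.dumps({"weights": [0.1, 0.2]})))   # [] -> nothing flagged

A pickle whose __reduce__ payload calls, say, os.system would surface a posix/nt system import here, whereas the plain dictionary in the example does not.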


An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities against Strong Detection

Yan, Shenao, Wang, Shen, Duan, Yue, Hong, Hanbin, Lee, Kiho, Kim, Doowon, Hong, Yuan

arXiv.org Artificial Intelligence

Large Language Models (LLMs) have transformed code completion tasks, providing context-based suggestions to boost developer productivity in software engineering. As users often fine-tune these models for specific applications, poisoning and backdoor attacks can covertly alter the model outputs. To address this critical security challenge, we introduce CodeBreaker, a pioneering LLM-assisted backdoor attack framework on code completion models. Unlike recent attacks that embed malicious payloads in detectable or irrelevant sections of the code (e.g., comments), CodeBreaker leverages LLMs (e.g., GPT-4) for sophisticated payload transformation (without affecting functionalities), ensuring that both the poisoned data for fine-tuning and generated code can evade strong vulnerability detection. CodeBreaker stands out with its comprehensive coverage of vulnerabilities, making it the first to provide such an extensive set for evaluation. Our extensive experimental evaluations and user studies underline the strong attack performance of CodeBreaker across various settings, validating its superiority over existing approaches. By integrating malicious payloads directly into the source code with minimal transformation, CodeBreaker challenges current security measures, underscoring the critical need for more robust defenses for code completion.
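To illustrate the kind of shallow screening that such payload transformation is designed to defeat, the toy sketch below runs an AST check over a generated completion and flags direct calls to eval, exec, or system; the flagged-name list and the example completions are assumptions for the example, not the detectors evaluated in the paper.

    # Toy AST screen over a generated completion; indirect, transformed payloads slip past it.
    import ast

    FLAGGED_CALLS = {"eval", "exec", "system"}

    def screen_completion(code: str) -> list[str]:
        findings = []
        for node in ast.walk(ast.parse(code)):
            if isinstance(node, ast.Call):
                fn = node.func
                name = fn.id if isinstance(fn, ast.Name) else getattr(fn, "attr", "")
                if name in FLAGGED_CALLS:
                    findings.append(f"line {node.lineno}: call to {name}")
        return findings

    print(screen_completion("import os\nos.system('ls')"))                        # flagged
    print(screen_completion("f = getattr(__import__('os'), 'sy' + 'stem')"))      # evades the check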


Malvertising in Google search results delivering stealers

#artificialintelligence

In recent months, we observed an increase in the number of malicious campaigns that use Google Advertising as a means of distributing and delivering malware. At least two different stealers, Rhadamanthys and RedLine, were abusing the search engine promotion plan in order to deliver malicious payloads to victims' machines. They seem to use the same technique of mimicking a website associated with well-known software like Notepad and Blender 3D. The threat actors create copies of legitimate software websites while employing typosquatting (registering misspelled versions of popular brand and company names as URLs) or combosquatting (combining popular brand and company names with arbitrary words in URLs) so that the sites look like the real thing to the end user, with domain names that allude to the original software or vendor. The design and content of the fake web pages look the same as those of the originals.
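As a rough sketch of how such lookalike domains can be screened (not a technique from the article), the snippet below compares a domain's first label against a small brand list using the standard-library difflib; the brand list, similarity threshold, and example domains are hypothetical.

    # Sketch: flag domains that look like typosquats or combosquats of known brands.
    from difflib import SequenceMatcher

    BRANDS = {"notepad-plus-plus", "blender"}

    def looks_like_squat(domain: str, threshold: float = 0.8) -> bool:
        label = domain.lower().split(".")[0]                  # e.g. "notepads-plus-plus"
        for brand in BRANDS:
            if brand in label and label != brand:             # combosquatting: brand + extra words
                return True
            if label != brand and SequenceMatcher(None, label, brand).ratio() >= threshold:
                return True                                   # typosquatting: small spelling edits
        return False

    print(looks_like_squat("notepads-plus-plus.org"))    # True
    print(looks_like_squat("blender3d-download.com"))    # True
    print(looks_like_squat("blender.org"))               # False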


Computer vision can help spot cyber threats with startling accuracy

#artificialintelligence

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence. The last decade's growing interest in deep learning was triggered by the proven capacity of neural networks in computer vision tasks. If you train a neural network with enough labeled photos of cats and dogs, it will be able to find recurring patterns in each category and classify unseen images with decent accuracy. What else can you do with an image classifier? In 2019, a group of cybersecurity researchers wondered if they could treat security threat detection as an image classification problem.
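A common way to treat threat detection as an image classification problem is to render a program's raw bytes as a grayscale image and hand it to an ordinary image classifier, which can then look for recurring texture patterns. The sketch below shows only that conversion step; the fixed 256-pixel width, the random stand-in bytes, and the output file name are assumptions made for the example (NumPy and Pillow assumed installed).

    # Sketch: render a file's raw bytes as a grayscale image, one pixel per byte.
    import numpy as np
    from PIL import Image

    def bytes_to_image(blob: bytes, width: int = 256) -> Image.Image:
        data = np.frombuffer(blob, dtype=np.uint8)
        rows = len(data) // width
        data = data[: rows * width].reshape(rows, width)      # truncate to a full grid
        return Image.fromarray(data)                          # uint8 2-D array -> 8-bit grayscale

    # Stand-in input; in practice this would be an executable, e.g. open("sample.exe", "rb").read()
    blob = np.random.default_rng(0).integers(0, 256, 65536, dtype=np.uint8).tobytes()
    bytes_to_image(blob).save("sample_gray.png")              # image then goes to a CNN classifier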


How AI Will Supercharge Spear-Phishing

#artificialintelligence

Imagine a strain of malware hidden on your colleague's computer. It watches their every move, quietly listening and learning as it sifts through their email, calendar, and messages. In the process, it doesn't just learn their writing style. It learns the unique way they interact with nearly everyone in their life. It picks up on the inside jokes they share with their spouse.


The Looming Rise of AI-Powered Malware

#artificialintelligence

In the past two years, we've learned that machine learning algorithms can manipulate public opinion, cause fatal car crashes, create fake porn, and manifest extremely sexist and racist behavior. And now, the cybersecurity threats of deep learning and neural networks are emerging. We're just beginning to catch glimpses of a future in which cybercriminals trick neural networks into making fatal mistakes and use deep learning to hide their malware and find their targets among millions of users. Part of the challenge of securing artificial intelligence applications lies in the fact that it's hard to explain how they work; even the people who create them are often hard-pressed to make sense of their inner workings. But unless we prepare ourselves for what is to come, we'll learn to appreciate and react to these threats the hard way.


Researchers Developed Artificial Intelligence-Powered Stealthy Malware

#artificialintelligence

Artificial Intelligence (AI) has been seen as a potential solution for automatically detecting and combating malware, stopping cyber attacks before they affect any organization. However, the same technology can also be weaponized by threat actors to power a new generation of malware that evades even the best cyber-security defenses and infects a computer network, or launches an attack, only when the target's face is detected by a camera.

To demonstrate this scenario, security researchers at IBM Research came up with DeepLocker, a new breed of "highly targeted and evasive" attack tool powered by AI, which conceals its malicious intent until it reaches a specific victim. According to the IBM researchers, DeepLocker flies under the radar without being detected and "unleashes its malicious action as soon as the AI model identifies the target through indicators like facial recognition, geolocation and voice recognition." In contrast to the "spray and pray" approach of traditional malware, researchers believe that this kind of stealthy AI-powered malware is particularly dangerous because, like nation-state malware, it could infect millions of systems without being detected. The malware can hide its malicious payload in benign carrier applications, like video conferencing software, to avoid detection by most antivirus and malware scanners until it reaches specific victims, who are identified via indicators such as voice recognition, facial recognition, geolocation and other system-level features.

"What is unique about DeepLocker is that the use of AI makes the 'trigger conditions' to unlock the attack almost impossible to reverse engineer," the researchers explain. "The malicious payload will only be unlocked if the intended target is reached."

To demonstrate DeepLocker's capabilities, the researchers designed a proof of concept, camouflaging the well-known WannaCry ransomware in a video conferencing app so that it remains undetected by security tools, including antivirus engines and malware sandboxes. With the built-in triggering condition, DeepLocker did not unlock and execute the ransomware on the system until it recognized the face of the target, which can be matched using publicly available photos of the target. "Imagine that this video conferencing application is distributed and downloaded by millions of people, which is a plausible scenario nowadays on many public platforms.